This work proposes a new computational framework for learning an explicit generative model for real-world datasets. In particular, we propose to learn a closed-loop transcription between a multi-class, multi-dimensional data distribution and a linear discriminative representation (LDR) in a feature space composed of multiple independent multi-dimensional linear subspaces. We argue that the optimal encoding and decoding mappings sought can be formulated as the equilibrium point of a two-player minimax game between the encoder and decoder. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure of the distance between mixtures of subspace-like Gaussians in the feature space. Our formulation draws inspiration from closed-loop error feedback in control systems and avoids the expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of auto-encoding and GANs, and naturally extends them to the setting of learning a representation that is both discriminative and generative for multi-class, multi-dimensional real-world data. Our extensive experiments on many benchmark image datasets demonstrate the great potential of this new closed-loop formulation: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and often better than, methods based on GANs, VAEs, or a combination of both. We note that the features of different classes so learned are explicitly mapped onto approximately independent principal subspaces in the feature space, and the diverse visual attributes within each class are modeled by the independent principal components within each subspace.
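For concreteness, the rate-reduction quantity referenced above is commonly written as follows; this is a sketch following the maximal-coding-rate-reduction literature rather than the abstract itself, with $Z \in \mathbb{R}^{d\times m}$ denoting the learned features, $\Pi_j$ the diagonal membership matrix of class $j$, and $\epsilon$ a prescribed distortion, all assumed notation not given in the abstract:
$$
\Delta R(Z, \Pi, \epsilon) \;=\; \frac{1}{2}\log\det\!\Big(I + \tfrac{d}{m\epsilon^{2}}\, Z Z^{\top}\Big) \;-\; \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}\,\log\det\!\Big(I + \tfrac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^{2}}\, Z \Pi_j Z^{\top}\Big),
$$
where the first term measures the coding rate of all features together and the second the average rate of the class-conditional features; the closed-loop game plays the encoder against the decoder on a rate-reduction-based distance of this form between original and transcribed features.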
In recent years, there has been growing interest in incorporating attention into deep learning architectures for biomedical image segmentation. The modular design of attention mechanisms enables flexible integration into convolutional neural network architectures such as the U-Net. Whether attention is appropriate to use, what type of attention to use, and where in the network to incorporate attention modules are all important considerations that are currently overlooked. In this paper, we investigate the role of the focal parameter in modulating attention, revealing a link between attention in loss functions and attention in networks. By incorporating a focal distance penalty term, we extend the Unified Focal loss framework to include boundary-based losses. Furthermore, we develop a simple and interpretable, dataset- and model-specific heuristic to integrate the focal parameter into the squeeze-and-excitation block and attention gate, achieving optimal performance with fewer attention modules on three well-validated biomedical imaging datasets, suggesting that judicious use of attention modules leads to better performance and efficiency.
Manual segmentation is used as the gold standard for evaluating neural networks on automated image segmentation tasks. Due to considerable heterogeneity in shape, colour, and texture, demarcating object boundaries is particularly difficult in biomedical images, resulting in significant inter- and intra-rater variability. Approaches such as soft labelling and distance penalty terms apply a global transformation to the ground truth, redefining the loss function with respect to uncertainty. However, global operations are computationally expensive, and neither approach accurately reflects the uncertainty underlying manual annotation. In this paper, we propose boundary uncertainty, which uses morphological operations to restrict soft labelling to object boundaries, providing an appropriate representation of the uncertainty in ground-truth labels and enabling robust model training where systematic manual segmentation errors are present. We incorporate boundary uncertainty into the Dice loss, achieving consistently improved performance across three well-validated biomedical imaging datasets compared with soft labelling and distance-weighted penalties. Boundary uncertainty not only more accurately reflects the segmentation process, it is also robust to segmentation errors and exhibits better generalisation.
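As a rough illustration of the morphological construction described above, the following is a minimal sketch, assuming a binary 2D ground-truth mask and using SciPy's morphological operators; the band width and soft-label values are illustrative choices, not the authors' settings.

```python
# Minimal sketch: soften labels only in a band around the object boundary.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_soft_labels(mask, width=2, inner=0.9, outer=0.1):
    """mask  : binary ground-truth array of shape (H, W)
    width : half-width of the boundary band, in dilation/erosion iterations
    inner : soft label assigned just inside the boundary
    outer : soft label assigned just outside the boundary
    """
    mask = np.asarray(mask, dtype=bool)
    dilated = binary_dilation(mask, iterations=width)
    eroded = binary_erosion(mask, iterations=width)

    soft = mask.astype(float)        # 1 inside, 0 outside, away from the boundary
    soft[dilated & ~mask] = outer    # background pixels within the boundary band
    soft[mask & ~eroded] = inner     # foreground pixels within the boundary band
    return soft
```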
The Dice similarity coefficient (DSC) is a widely used metric and loss function due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be meaningfully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information for interpretation by scientists and clinicians. In this study, we identify poor calibration as an emerging challenge for deep learning-based biomedical image segmentation. We provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across five well-validated open-source biomedical imaging datasets. Similarly, we observe significantly improved calibration when the DSC++ loss is integrated into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well-calibrated outputs allow tailoring of the precision-recall bias, an important post-processing technique for adapting model predictions to the biomedical or clinical task at hand. The DSC++ loss overcomes the major limitation of the DSC, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice.
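To make the idea of modulating the penalty on overconfident errors concrete, here is a hedged sketch of a soft Dice loss whose false-positive and false-negative terms carry a focal-style exponent; the exact DSC++ formulation is defined in the paper, and gamma and eps below are illustrative values.

```python
# Sketch: soft Dice loss with gamma-modulated error terms. Raising the error
# terms to the power gamma concentrates the penalty gradient on high-confidence
# mistakes while relaxing it for uncertain predictions.
import torch

def modulated_soft_dice_loss(probs, target, gamma=2.0, eps=1e-6):
    """probs, target: tensors of shape (N, ...) with values in [0, 1]."""
    probs = probs.flatten(1)
    target = target.flatten(1)

    tp = (probs * target).sum(dim=1)
    fp = ((probs ** gamma) * (1.0 - target)).sum(dim=1)       # confident false positives
    fn = (((1.0 - probs) ** gamma) * target).sum(dim=1)       # confident false negatives

    dice = (2.0 * tp + eps) / (2.0 * tp + fp + fn + eps)
    return (1.0 - dice).mean()
```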
Automatic segmentation methods are an important advancement in medical image analysis. Machine learning techniques, and deep neural networks in particular, are the state of the art for most medical image segmentation tasks. Class imbalance poses a significant challenge in medical datasets, with lesions often occupying a considerably smaller volume relative to the background. Loss functions used in training deep learning algorithms differ in their robustness to class imbalance, with direct consequences for model convergence. The most commonly used loss functions for segmentation are based on the cross-entropy loss, the Dice loss, or a combination of the two. We propose the Unified Focal loss, a new hierarchical framework that generalises Dice- and cross-entropy-based losses for handling class imbalance. We evaluate it on five publicly available, class-imbalanced medical imaging datasets: CVC-ClinicDB, Digital Retinal Images for Vessel Extraction (DRIVE), Breast Ultrasound 2017 (BUS2017), Brain Tumour Segmentation 2020 (BraTS20), and Kidney Tumour Segmentation 2019 (KiTS19). We compare its performance against six Dice- or cross-entropy-based loss functions across 2D binary, 3D binary, and 3D multiclass segmentation tasks, demonstrating that our proposed loss function is robust to class imbalance and consistently outperforms the other loss functions. Source code is available at: https://github.com/mlyg/unified-focal-loss
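The following is a simplified sketch of the kind of hybrid loss this framework generalises: a convex combination of a focal cross-entropy term and a focal Tversky term. The exact asymmetric variants making up the Unified Focal loss are defined in the repository linked above; lambda_, delta, and gamma here are illustrative defaults, not the recommended settings.

```python
# Sketch: weighted combination of a focal cross-entropy term and a focal
# Tversky term, assuming probability maps and one-hot labels of shape (N, C, H, W).
import torch

def focal_tversky_term(probs, target, delta=0.7, gamma=0.75, eps=1e-6):
    tp = (probs * target).sum(dim=(1, 2, 3))
    fn = ((1 - probs) * target).sum(dim=(1, 2, 3))
    fp = (probs * (1 - target)).sum(dim=(1, 2, 3))
    tversky = (tp + eps) / (tp + delta * fn + (1 - delta) * fp + eps)
    return ((1 - tversky) ** gamma).mean()

def focal_ce_term(probs, target, gamma=2.0, eps=1e-6):
    p_t = probs * target + (1 - probs) * (1 - target)
    return (-((1 - p_t) ** gamma) * torch.log(p_t + eps)).mean()

def hybrid_focal_loss(probs, target, lambda_=0.5):
    return lambda_ * focal_ce_term(probs, target) + (1 - lambda_) * focal_tversky_term(probs, target)
```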
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
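As a rough illustration of the trigger-injection step in NAIVEATTACK, the sketch below stamps a small patch onto a fraction of the raw images and relabels them to an attacker-chosen class before distillation begins; the patch size, location, and poisoning rate are illustrative assumptions, and DOORPING's iterative trigger optimization during distillation is not shown.

```python
# Sketch: stamp a fixed backdoor trigger onto a subset of the raw training data.
import numpy as np

def stamp_trigger(images, labels, target_class, poison_frac=0.1, patch=3, value=1.0, seed=0):
    """images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    # A small white square in the bottom-right corner acts as the trigger.
    images[idx, -patch:, -patch:, :] = value
    labels[idx] = target_class
    return images, labels
```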
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow and the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
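For illustration, here is a minimal sketch of the masked-token objective described above: randomly mask a subset of discrete image tokens and train a model to predict the originals, conditioned on a text embedding. The masking rate, vocabulary, and model interface are assumptions made for the sketch, not Muse's actual implementation.

```python
# Sketch: cross-entropy on randomly masked discrete image tokens,
# conditioned on a text embedding from a pre-trained language model.
import torch
import torch.nn.functional as F

def masked_token_loss(model, image_tokens, text_embedding, mask_token_id, mask_rate=0.5):
    """image_tokens: (N, L) int tensor of discrete image-token ids."""
    mask = torch.rand_like(image_tokens, dtype=torch.float) < mask_rate
    inputs = image_tokens.masked_fill(mask, mask_token_id)
    logits = model(inputs, text_embedding)          # expected shape: (N, L, vocab_size)
    # Compute the loss only on the masked positions.
    return F.cross_entropy(logits[mask], image_tokens[mask])
```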
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.